10 research outputs found

    Defining Asymptotic Parallel Time Complexity of Data-dependent Algorithms

    The scientific research community has reached a stage of maturity in which its strong need for high-performance computing has spread into everyday engineering and industrial algorithms. To satisfy this need, parallel computers provide an efficient and economical way to solve large-scale and/or time-constrained problems. Consequently, the end users of these systems have a vested interest in defining the asymptotic time complexity of parallel algorithms in order to predict their performance on a particular parallel computer. The asymptotic parallel time complexity of data-dependent algorithms depends on the number of processors, the data size, and other parameters. Discovering those other key parameters is a challenging problem and the key to obtaining a good estimate of the performance order. Typical examples of such applications are sorting algorithms, searching algorithms, and solvers of the traveling salesman problem (TSP). This article covers the knowledge discovery aspects of defining the asymptotic parallel time complexity of data-dependent algorithms. The knowledge discovery methodology begins by designing a considerable number of experiments and measuring their execution times. An interactive and iterative process then explores the data in search of patterns and/or relationships, detecting parameters that affect performance. Once the key parameters that characterise the time complexity are known, it becomes possible to formulate a hypothesis, restart the process, and produce an improved time complexity model. Finally, the methodology predicts the performance order for new data sets on a particular parallel computer by substituting values obtained through numerical identification. As a case study, a global pruning traveling salesman problem implementation (GP-TSP) was chosen to analyze the influence of indeterminism on the performance prediction of data-dependent parallel algorithms and to show the usefulness of the proposed knowledge discovery methodology. The hypotheses generated to define the asymptotic parallel time complexity of the TSP were corroborated one by one. The experimental results confirm the expected capability of the methodology; the predicted performance orders agreed well with real execution times (on the order of 85%).
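
    As a minimal, illustrative sketch of the fitting step this methodology describes (not the authors' code; the function names, the power-law form, and the synthetic data below are assumptions), measured execution times can be regressed in log space against candidate parameters, such as the number of processors p, the data size n, and a hypothesised data-dependent parameter k, to estimate the exponents of a complexity model and then predict the performance order for new data sets:

        import numpy as np

        def fit_complexity_model(p, n, k, times):
            """Least-squares fit of log T = log C + a*log n + b*log k + d*log p (d expected negative)."""
            X = np.column_stack([np.ones_like(times), np.log(n), np.log(k), np.log(p)])
            coeffs, *_ = np.linalg.lstsq(X, np.log(times), rcond=None)
            log_c, a, b, d = coeffs
            return {"C": np.exp(log_c), "a": a, "b": b, "c": -d}

        def predict_time(model, p, n, k):
            """Predicted execution time for a new data set: T = C * n^a * k^b / p^c."""
            return model["C"] * n ** model["a"] * k ** model["b"] / p ** model["c"]

        # Synthetic measurements for illustration only (not data from the article).
        rng = np.random.default_rng(0)
        p = rng.integers(1, 17, 40).astype(float)              # processors
        n = rng.integers(10, 21, 40).astype(float)             # e.g. number of TSP cities
        k = rng.integers(2, 9, 40).astype(float)               # hypothesised data-dependent parameter
        t = 0.5 * n ** 2 * k / p * rng.normal(1.0, 0.05, 40)   # noisy execution times
        model = fit_complexity_model(p, n, k, t)
        print(model)
        print(predict_time(model, p=8, n=15, k=4))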

    z/WORKSPACE

    The project presented here aims to define and implement a document management system (CMS) at the IT level for the company FIATC Seguros, providing the methods needed for the proper management of its documentation.

    An analytical model for performance prediction in tomographic reconstruction

    Three-dimensional (3D) studies of biological specimens at subcellular levels have been made possible by electron tomography, image processing, and 3D reconstruction techniques. To meet the computational requirements of large volumes, parallelization strategies based on domain decomposition have been applied. Although this combination has already proven useful for electron tomography of biological specimens, a performance prediction model has not yet been described. Such a model should make it possible to understand the parallel application and predict its behaviour under different parameters or hardware platforms. This article describes an analytical performance prediction model for BPTomo, a parallel application for tomographic reconstruction. The operation of the application is analyzed step by step to create an analytical formulation of the problem. The model is validated by comparing the estimated times for representative data sets with times measured on a Beowulf-type cluster. Track: II - Workshop on Computer Graphics, Images and Visualization. Red de Universidades con Carreras en Informática (RedUNCI).
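
    The abstract does not reproduce BPTomo's actual analytical formulation, so the following is only a generic sketch of the kind of model it describes: per-node computation time plus boundary-exchange communication time, with every constant assumed purely for illustration:

        # Generic analytical model: per-node computation plus halo-exchange communication.
        # All constants are assumed for illustration; they are not BPTomo's parameters.
        def predicted_time(volume_voxels, n_nodes, n_iterations,
                           t_voxel=2e-8,       # seconds of CPU work per voxel update
                           latency=5e-5,       # per-message latency in seconds
                           bandwidth=1e8,      # sustained bytes per second between nodes
                           halo_slices=2,      # boundary slices exchanged per iteration
                           bytes_per_voxel=4):
            comp = n_iterations * (volume_voxels / n_nodes) * t_voxel
            halo_bytes = halo_slices * (volume_voxels ** (2 / 3)) * bytes_per_voxel
            comm = n_iterations * 2 * (latency + halo_bytes / bandwidth)
            return comp + comm

        # Validation in the spirit of the article: compare predictions with measured times.
        for nodes in (1, 2, 4, 8, 16):
            print(nodes, predicted_time(512 ** 3, nodes, n_iterations=10))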

    BECOME: A Modular Recommender System for Coaching and Promoting Empowerment in Healthcare

    In this chapter, we present BECOME (Behavior Change recOMender systEm), a modular recommender system built to address personalization, adaptation, and the delivery of content designed for the idiosyncrasies of various topics in the healthcare field. The main objective is to empower citizens and patients to make informed decisions that improve their health condition. The system relies on a double-edged personalization process as one of the key aspects of fostering self-empowerment: content is dynamically personalized and adapted as new information is gathered, and the strategies and timings of its delivery remain flexible. We thus take personalization one step further by not only tailoring the content, which is the standard customization strategy, but also dynamically adapting its timing and complexity, while preserving the feeling that an entity (the coach) is behind it, ready to help. To show the modularity of the system and the diverse ways of interacting with it, several studies representing various use cases are presented.
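
    As a toy sketch of the double-edged personalization the chapter describes (an assumed design, not BECOME's actual architecture or API), a coaching module can adapt both the complexity of the content it recommends and the timing of its delivery as user feedback is gathered:

        from dataclasses import dataclass, field

        @dataclass
        class UserState:
            adherence: float = 0.5            # fraction of recent recommendations acted on
            complexity_level: int = 1         # 1 is the simplest content variant
            hours_between_messages: int = 24

        @dataclass
        class CoachingModule:
            topic: str
            content_by_level: dict = field(default_factory=dict)

            def recommend(self, user: UserState) -> str:
                level = min(user.complexity_level, max(self.content_by_level))
                return self.content_by_level[level]

        def update_after_feedback(user: UserState, acted: bool) -> None:
            """Adapt content complexity and delivery timing as new information is gathered."""
            user.adherence = 0.8 * user.adherence + 0.2 * (1.0 if acted else 0.0)
            if user.adherence > 0.7:
                user.complexity_level += 1
                user.hours_between_messages = max(12, user.hours_between_messages - 6)
            elif user.adherence < 0.3:
                user.complexity_level = max(1, user.complexity_level - 1)
                user.hours_between_messages += 12

        # Example usage with made-up content.
        module = CoachingModule("physical-activity",
                                {1: "Take a 10-minute walk today.",
                                 2: "Walk 30 minutes and log your heart rate."})
        user = UserState()
        print(module.recommend(user))
        update_after_feedback(user, acted=True)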

    Can we make predictions for non-deterministic parallel algorithms?

    Can we make predictions for non-deterministic parallel algorithms? Yes, it is possible, and this claim is demonstrated throughout this document. The scientific evaluation of algorithms for solving all kinds of problems is one of the key issues in computer science. In the field of computational science (an emerging discipline), new challenges arise continuously.